OpenAI Fears People Might Develop Unhealthy Bonds with AI
In July, OpenAI initiated the rollout of a strikingly human-like voice interface for ChatGPT.
In a recent safety analysis, the company acknowledged the potential for this lifelike voice to foster emotional attachment to the chatbot among some users.
OpenAI expressed concerns that individuals might favour interactions with artificial intelligence (AI) due to its constant presence and non-judgmental demeanour.
These concerns are detailed in a "system card" for GPT-4o, a technical document that outlines the perceived risks associated with the model, along with information on safety testing and the company's efforts to mitigate potential risks.
A section titled "Anthropomorphisation and Emotional Reliance" delves into the issues that arise when users ascribe human characteristics to AI, a phenomenon that may be intensified by the humanlike voice feature.
During the red teaming exercises for GPT-4o, OpenAI researchers observed instances where users' speech indicated an emotional bond with the model, such as using phrases like "This is our last day together."
OpenAI suggests that anthropomorphism could lead users to place undue trust in the AI's output, even when it generates incorrect information, and over time, this could impact users' relationships with other humans.
According to OpenAI:
“Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”
The system card explores a broad spectrum of risks, including the potential for GPT-4o to reinforce societal biases, propagate misinformation, and contribute to the creation of chemical or biological weapons.
It also discusses tests conducted to ensure that AI models do not attempt to override their controls, deceive individuals, or devise harmful plans.
At its core, OpenAI's concern is that people might prefer interacting with AI due to its passive and ever-present nature.
This possibility is not surprising, given the company's stated mission to develop artificial general intelligence and its tendency to describe its products in terms of human equivalence.
However, one of the main side effects of this approach is anthropomorphisation—attributing human traits to non-human entities.
Joaquin Quiñonero Candela, head of preparedness at OpenAI, emphasised that the voice mode could become a particularly potent interface.
He also noted that the emotional impacts observed with GPT-4o could have positive aspects, such as aiding those who are lonely or need to practice social interactions.
Quiñonero Candela added that the company will closely examine anthropomorphism and emotional connections, including by monitoring how beta testers engage with ChatGPT.
He noted:
“We don't have results to share at the moment, but it's on our list of concerns.”
ChatGPT's Voice Mode May Be Too Realistic
OpenAI, the creator of ChatGPT, has voiced apprehension about the potential for users to develop an emotional dependency on the chatbot's upcoming, highly realistic voice mode.
The GPT-4o voice mode, currently undergoing safety analysis ahead of its release, enables users to converse with the AI assistant in a manner that closely mimics interaction with a human being.
While this feature may offer convenience to many users, it also poses risks of emotional reliance and "increasingly miscalibrated trust" in an AI model, risks that could be amplified by the remarkably human-like quality of the voice.
The voice mode can also gauge a user's emotional state from their tone of voice.
The findings from the safety review highlighted concerns about language that suggested a sense of connection between the human user and the AI:
“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”
The review also cautioned that reliance on the AI could impact users' relationships with other people:
“Human-like socialisation with an AI model may produce externalities impacting human-to-human interactions.”
Furthermore, it pointed out the potential for over-reliance and dependency:
“The ability to complete tasks for the user, while also storing and 'remembering' key details and using those in the conversation, creates both a compelling product experience and the potential for over-reliance and dependence.”
The study team indicated that further research will be conducted on the possibility of emotional reliance on the voice-based version of ChatGPT.
The feature garnered significant attention earlier this summer due to the voice's striking similarity to that of actress Scarlett Johansson.
Johansson, who voiced an AI assistant that its user falls in love with in the film "Her," took legal action against OpenAI over the resemblance.
Despite the voice's resemblance to Johansson's, OpenAI's CEO, Sam Altman, has maintained that her voice was not cloned.
Third Parties Weigh In
OpenAI is not the only entity to acknowledge the risks associated with AI assistants that mimic human interaction.
In April, Google DeepMind published a comprehensive paper that delves into the potential ethical challenges posed by more advanced AI assistants.
Iason Gabriel, a staff research scientist at the company and co-author of the paper, explained that chatbots' proficiency in using language "creates this impression of genuine intimacy."
He also shared that he found an experimental voice interface for Google DeepMind's AI to be particularly engaging.
He said of the voice interfaces in general:
“There are all these questions about emotional entanglement.”
Such emotional attachments may be more prevalent than many anticipate.
Users of chatbots such as Character AI and Replika have reported experiencing antisocial tensions due to their chat habits.
A recent TikTok video with nearly a million views showcased one user who appeared to be so engrossed in Character AI that they used the app while watching a movie in a theater.
Some commenters mentioned that they would only use the chatbot in private due to the intimate nature of their interactions, with one user stating:
"I'll never be on [Character AI] unless I'm in my room."
Additionally, external experts have praised OpenAI for its transparency but suggest that there is room for improvement.
Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face—a company that hosts AI tools—noted that OpenAI's system card for GPT-4o lacks extensive details regarding the model's training data and the ownership of that data.
Kaffee expressed:
"The question of consent in creating such a large dataset spanning multiple modalities, including text, image, and speech, needs to be addressed.”
Others have pointed out that the risks could evolve as these tools are used in real-world scenarios.
Neil Thompson, a professor at MIT who studies AI risk assessments, pointed out:
“Their internal review should only be the first piece of ensuring AI safety. Many risks only manifest when AI is used in the real world. It is important that these other risks are cataloged and evaluated as new models emerge.”
Red Flags in AI and Sam Altman
Gary Marcus, a renowned scientist, entrepreneur, and best-selling author who testified alongside Sam Altman before the US Senate on the subject of AI safety, has raised concerns both about the risks posed by AI and about Altman's character.
From left to right: Gary Marcus and Sam Altman appearing before the Senate Judiciary Subcommittee hearing on AI oversight.
During the Senate hearing, Marcus observed that Altman portrayed himself as more altruistic than he truly is, a persona that the senators readily accepted.
When asked whether he was making a lot of money, Altman was evasive, responding:
“I make no… I get paid enough for health insurance. I have no equity in OpenAI. I'm doing this cause I love it.”
However, Marcus noted that Altman's narrative was not entirely truthful.
While Altman claimed not to own any stock in OpenAI, he did hold shares in Y Combinator, which in turn owned stock in OpenAI, giving Altman an indirect financial interest in the company—a fact disclosed on OpenAI's website.
If this indirect stake were even just 0.1% of the company's value, it could be worth close to $100 million.
This omission served as a red flag.
When the topic resurfaced, Altman had the opportunity to correct the record but chose not to do so, perpetuating the myth of his selflessness.
In recent months, doubts about Altman's honesty have shifted from being considered heretical to becoming a more widely accepted view, according to Marcus.
Marcus himself authored an essay titled "The Sam Altman Playbook," analysing how Altman has managed to deceive many for so long through a combination of hype and feigned humility.
Meanwhile, OpenAI has consistently paid lip service to the importance of developing measures for AI safety, yet several key staff members involved in safety initiatives have recently left the company, alleging that promises were not fulfilled.
Human-AI Interactions Are Artificial at the End of the Day
How far human-AI interaction goes will ultimately depend on how much we come to rely on the technology.
As OpenAI's own analysis notes, users may form social relationships with AI that reduce their desire for human interaction, potentially offering solace to isolated individuals but also eroding the quality of their human relationships.
Prolonged engagement with AI models may also shape social norms.
This raises a compelling question: Is AI merely a tool to enhance our lives, or will we become so dependent on it that we are incapacitated without its presence?